
    Temporal Robustness against Data Poisoning

    Data poisoning considers cases where an adversary manipulates the behavior of machine learning algorithms through malicious training data. Existing threat models of data poisoning center around a single metric, the number of poisoned samples. Consequently, if attackers can poison more samples than expected with affordable overhead, as in many practical scenarios, they may be able to render existing defenses ineffective in a short time. To address this issue, we leverage timestamps denoting the birth dates of data, which are often available but have been neglected in the past. Benefiting from these timestamps, we propose a temporal threat model of data poisoning with two novel metrics, earliness and duration, which respectively measure how far in advance an attack started and how long an attack lasted. Using these metrics, we define notions of temporal robustness against data poisoning, providing a meaningful sense of protection even with unbounded numbers of poisoned samples. We present a benchmark with an evaluation protocol simulating continuous data collection and periodic deployments of updated models, enabling empirical evaluation of temporal robustness. Lastly, we develop and empirically verify a baseline defense, namely temporal aggregation, which offers provable temporal robustness and highlights the potential of our temporal threat model for data poisoning.
    Comment: 13 pages, 7 figures
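    A minimal sketch of what a temporal-aggregation style defense can look like: training data is split into time buckets by timestamp, one base model is trained per bucket, and predictions are majority votes over buckets. The bucketing scheme, base learner, and all names below are illustrative assumptions, not the paper's implementation.

```python
from collections import Counter

import numpy as np
from sklearn.linear_model import LogisticRegression


def train_temporal_aggregation(X, y, timestamps, n_buckets=10):
    """Train one base model per time bucket (illustrative sketch).

    An attack of bounded duration only contaminates the buckets that
    overlap its active window, so the majority vote retains a margin
    of clean voters no matter how many samples were poisoned.
    Assumes every bucket contains samples from each class.
    """
    # Bucket samples by their birth-date timestamps.
    edges = np.quantile(timestamps, np.linspace(0, 1, n_buckets + 1))
    bucket = np.clip(np.searchsorted(edges, timestamps) - 1, 0, n_buckets - 1)
    return [
        LogisticRegression(max_iter=1000).fit(X[bucket == b], y[bucket == b])
        for b in range(n_buckets)
    ]


def predict_majority(models, x):
    # Aggregate per-bucket predictions by majority vote.
    votes = Counter(int(m.predict(x.reshape(1, -1))[0]) for m in models)
    return votes.most_common(1)[0][0]
```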

    On Practical Aspects of Aggregation Defenses against Data Poisoning Attacks

    The increasing access to data poses both opportunities and risks in deep learning, as one can manipulate the behavior of deep learning models with malicious training samples. Such attacks are known as data poisoning. Recent advances in defense strategies against data poisoning have highlighted the effectiveness of aggregation schemes in achieving state-of-the-art certified poisoning robustness. However, the practical implications of these approaches remain unclear. Here we focus on Deep Partition Aggregation, a representative aggregation defense, and assess its practical aspects, including efficiency, performance, and robustness. For evaluation, we use ImageNet resized to a resolution of 64 by 64 pixels, enabling evaluation at a larger scale than in previous work. First, we demonstrate a simple yet practical approach to scaling base models, which improves the efficiency of training and inference for aggregation defenses. Second, we provide empirical evidence supporting the data-to-complexity ratio, i.e., the ratio between dataset size and sample complexity, as a practical estimate of the maximum number of base models that can be deployed while preserving accuracy. Finally, we point out how aggregation defenses boost empirical poisoning robustness through the poisoning overfitting phenomenon, the key mechanism underlying the empirical poisoning robustness of aggregations. Overall, our findings provide valuable insights for practical implementations of aggregation defenses to mitigate the threat of data poisoning.
    Comment: 15 pages
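    To make the aggregation mechanism concrete, here is a rough sketch in the spirit of Deep Partition Aggregation: a deterministic hash sends each training sample to exactly one partition, one base model is trained per partition (training omitted), and the gap between the top two vote counts yields a certified poisoning radius. The hash choice and the simplified radius formula are assumptions, not the paper's exact construction.

```python
import hashlib
from collections import Counter


def partition_of(sample_bytes: bytes, k: int) -> int:
    # Deterministic hash-based assignment: a poisoned sample lands in,
    # and can corrupt, at most one of the k partitions.
    return int(hashlib.sha256(sample_bytes).hexdigest(), 16) % k


def certified_radius(votes: Counter) -> int:
    # Each poisoned sample flips at most one base model's vote, so the
    # prediction is certified up to half the gap between the top two
    # vote counts (tie-breaking details omitted for simplicity).
    ranked = votes.most_common(2)
    runner_up = ranked[1][1] if len(ranked) > 1 else 0
    return (ranked[0][1] - runner_up) // 2
```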

    Spuriosity Rankings: Sorting Data to Measure and Mitigate Biases

    We present a simple but effective method to measure and mitigate model biases caused by reliance on spurious cues. Instead of requiring costly changes to one's data or model training, our method better utilizes the data one already has by sorting it. Specifically, we rank images within their classes by spuriosity (the degree to which common spurious cues are present), proxied via deep neural features of an interpretable network. With spuriosity rankings, it is easy to identify minority subpopulations (i.e., low-spuriosity images) and assess model bias as the gap in accuracy between high- and low-spuriosity images. One can even efficiently remove a model's bias at little cost to accuracy by finetuning its classification head on low-spuriosity images, resulting in fairer treatment of samples regardless of spuriosity. We demonstrate our method on ImageNet, annotating 5000 class-feature dependencies (630 of which we find to be spurious) and generating a dataset of 325k soft segmentations for these features along the way. Having computed spuriosity rankings via the identified spurious neural features, we assess biases for 89 diverse models and find that class-wise biases are highly correlated across models. Our results suggest that model bias due to spurious feature reliance is influenced far more by what the model is trained on than by how it is trained.
    Comment: Accepted to NeurIPS '23 (Spotlight). Camera-ready version
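    A minimal sketch of how the ranking and the bias measurement might be computed, assuming one already has per-image features from an interpretable network, the indices of the identified spurious feature dimensions, and per-image correctness flags; all names here are illustrative.

```python
import numpy as np


def spuriosity(features, spurious_dims):
    # Proxy for spuriosity: mean activation of the identified spurious
    # feature dimensions for each image (higher = cue more present).
    return features[:, spurious_dims].mean(axis=1)


def bias_gap(scores, correct, frac=0.25):
    # Accuracy gap between the highest- and lowest-spuriosity slices of
    # a class; a large gap indicates reliance on the spurious cue.
    order = np.argsort(scores)
    n = max(1, int(len(scores) * frac))
    return correct[order[-n:]].mean() - correct[order[:n]].mean()
```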

    Effect of AFM nanoindentation loading rate on the characterization of mechanical properties of vascular endothelial cell

    Vascular endothelial cells form a barrier that blocks the delivery of drugs into brain tissue for the treatment of central nervous system diseases. The mechanical responses of vascular endothelial cells play a key role in how drugs pass through the blood–brain barrier. Although AFM (atomic force microscopy) nanoindentation experiments have been widely used to investigate the mechanical properties of cells, the particular mechanism that determines the mechanical response of vascular endothelial cells is still poorly understood. To overcome this limitation, nanoindentation experiments were performed at different loading rates during the ramp stage to investigate the effect of loading rate on the characterization of the mechanical properties of bEnd.3 cells (a mouse brain endothelial cell line). Inverse finite element analysis was implemented to determine the mechanical properties of bEnd.3 cells. The loading rate effect appears to be more significant for the short-term peak force than for the long-term force. A higher loading rate results in a larger elastic modulus for bEnd.3 cells, while some mechanical parameters show an ambiguous dependence on the indentation rate. This study provides new insights into the mechanical responses of vascular endothelial cells, which are important for a deeper understanding of cell mechanobiology in the blood–brain barrier.
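    The paper extracts mechanical properties via inverse finite element analysis; as a simpler, commonly used first pass, the sketch below fits an apparent elastic modulus to a single force-indentation ramp with a Hertz model for a conical tip. The tip half-angle, Poisson ratio, and function names are assumed, illustrative values rather than the paper's setup.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed probe geometry and cell properties (illustrative values).
HALF_ANGLE = np.deg2rad(35.0)  # conical tip half-angle
POISSON = 0.5                  # near-incompressible cell


def hertz_cone(delta, E):
    # Hertz contact force for a conical indenter:
    #   F = (2 / pi) * E / (1 - nu^2) * tan(theta) * delta^2
    return (2 / np.pi) * E / (1 - POISSON**2) * np.tan(HALF_ANGLE) * delta**2


def fit_modulus(indentation_m, force_n):
    # Least-squares fit of an apparent elastic modulus (Pa) to one
    # force-indentation ramp recorded at a single loading rate.
    (E,), _ = curve_fit(hertz_cone, indentation_m, force_n, p0=[1e3])
    return E
```

    Repeating such a fit for ramps recorded at different loading rates is one way to quantify the rate dependence the abstract describes.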

    PSO-FNN-Based Vertical Handoff Decision Algorithm in Heterogeneous Wireless Networks

    To address the problem that fuzzy-logic and neural-network-based vertical handoff algorithms in heterogeneous wireless networks have not reasonably accounted for the load state, a PSO-FNN-based vertical handoff decision algorithm is proposed. The algorithm performs reinforcement learning of the fuzzy neural network (FNN) parameters with the objective of equalizing blocking probability, allowing it to adapt dynamically to the load state, and combines this with the particle swarm optimization (PSO) algorithm, whose global optimization capability is used to set the initial parameters and improve the precision of parameter learning. Simulation results show that the PSO-FNN algorithm balances the load of heterogeneous wireless networks effectively and decreases both the blocking probability and the handoff call blocking probability compared to the sum-received signal strength (S-RSS) algorithm.
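    A minimal sketch of the PSO stage, assuming a user-supplied fitness function that scores how far a candidate FNN parameter vector is from equalizing blocking probabilities across networks; the hyperparameters and names are illustrative, not the paper's settings.

```python
import numpy as np


def pso_minimize(fitness, dim, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5, lo=-1.0, hi=1.0):
    """Minimal PSO loop for choosing initial FNN parameters.

    `fitness` is assumed to score a candidate parameter vector, e.g.
    by how unequal the resulting per-network blocking probabilities
    are; lower is better.
    """
    x = np.random.uniform(lo, hi, (n_particles, dim))  # positions
    v = np.zeros_like(x)                               # velocities
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = np.random.rand(2, n_particles, dim)
        # Pull each particle toward its personal best and the swarm best.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([fitness(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest
```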

    Adversarial Robustness of Learning-based Static Malware Classifiers

    Malware detection has long been a stage for an ongoing arms race between malware authors and anti-virus systems. Solutions that utilize machine learning (ML) gain traction as the scale of this arms race increases. This trend, however, makes performing attacks directly on ML an attractive prospect for adversaries. We study this arms race from both perspectives in the context of MalConv, a popular convolutional neural network-based malware classifier that operates on the raw bytes of files. First, we show that MalConv is vulnerable to adversarial patch attacks: appending a byte-level patch to malware files bypasses detection 94.3% of the time. Moreover, we develop a universal adversarial patch (UAP) attack in which a single patch can, in constant time, drop the detection rate of any malware file that contains it by 80%. These patches are effective even though they are relatively small with respect to the original file size, between 2% and 8%. As a countermeasure, we then perform window ablation, which allows us to apply de-randomized smoothing, a modern certified defense against patch attacks in vision tasks, to raw files. The resulting 'smoothed-MalConv' can detect over 80% of malware containing the universal patch and provides certified robustness of up to 66%, outlining a promising step towards robust malware detection. To our knowledge, we are the first to apply a universal adversarial patch attack and a certified defense using ablations at the byte level in the malware domain.
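    A rough sketch of the window-ablation smoothing idea: classify disjoint byte windows independently and take a majority vote, so a contiguous adversarial patch can only influence the few windows it overlaps. The window size, classifier interface, and certificate condition below are simplified assumptions rather than the authors' exact construction.

```python
from collections import Counter


def smoothed_classify(file_bytes, base_clf, window=512):
    """Window-ablation smoothing sketch for byte-level inputs.

    `base_clf` is an assumed classifier that labels a single byte
    window ('malware' or 'benign'); the window size is illustrative.
    """
    votes = Counter()
    for i in range(0, len(file_bytes), window):
        votes[base_clf(file_bytes[i:i + window])] += 1
    (label, top), = votes.most_common(1)
    runner_up = votes.most_common(2)[1][1] if len(votes) > 1 else 0
    # A contiguous patch of length L overlaps at most L // window + 1
    # disjoint windows, so the label is certifiably unchanged whenever
    # the vote gap exceeds twice that number.
    return label, top - runner_up
```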